
online learning


Parameter-Free Online Learning via Model Selection

Neural Information Processing Systems

We introduce an efficient algorithmic framework for model selection in online learning, also known as parameter-free online learning. Departing from previous work, which has focused on highly structured function classes such as nested balls in Hilbert space, we propose a generic meta-algorithm framework that achieves online model selection oracle inequalities under minimal structural assumptions. We give the first computationally efficient parameter-free algorithms that work in arbitrary Banach spaces under mild smoothness assumptions; previous results applied only to Hilbert spaces. We further derive new oracle inequalities for matrix classes, non-nested convex sets, and $\mathbb{R}^{d}$ with generic regularizers. Finally, we generalize these results by providing oracle inequalities for arbitrary non-linear classes in the online supervised learning model. These results are all derived through a unified meta-algorithm scheme using a novel multi-scale algorithm for prediction with expert advice based on random playout, which may be of independent interest.
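The unified scheme described in this abstract can be pictured as running a bank of base learners at different scales and combining their predictions with an experts algorithm. The sketch below is only a hedged illustration of that structure: it uses a plain exponentially weighted aggregator over online-gradient-descent base learners on nested balls, not the paper's multi-scale, random-playout algorithm, and the names `OGDBall`, `run_meta`, the scale grid, and the learning rates are assumptions made for this example.

```python
import numpy as np

# Illustrative model-selection meta-algorithm (not the paper's multi-scale
# random-playout method): each base learner is online gradient descent over a
# ball of a different radius; predictions are combined by exponential weights.

class OGDBall:
    """Online gradient descent constrained to an L2 ball of a given radius."""
    def __init__(self, dim, radius, lr):
        self.w = np.zeros(dim)
        self.radius, self.lr = radius, lr

    def predict(self):
        return self.w

    def update(self, grad):
        self.w -= self.lr * grad
        norm = np.linalg.norm(self.w)
        if norm > self.radius:                 # project back onto the ball
            self.w *= self.radius / norm

def run_meta(T, dim, grad_oracle, scales=(0.1, 1.0, 10.0), eta=0.5):
    """Aggregate base learners at several scales; returns the last played point."""
    learners = [OGDBall(dim, r, lr=r / np.sqrt(T)) for r in scales]
    log_weights = np.zeros(len(learners))      # experts weights kept in log-space
    for t in range(T):
        preds = np.array([bl.predict() for bl in learners])
        p = np.exp(log_weights - log_weights.max())
        p /= p.sum()
        w_play = p @ preds                     # aggregated prediction for round t
        grad = grad_oracle(t, w_play)          # gradient of the round-t loss
        losses = preds @ grad                  # linearized loss of each expert
        log_weights -= eta * losses            # exponential-weights update
        for bl in learners:
            bl.update(grad)                    # all base learners share the gradient
    return w_play
```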



Neural Information Processing Systems

Some recent results do consider residual-like elements (see discussion of related work below), but generally do not apply to standard architectures.









Gradient-Variation Online Learning under Generalized Smoothness

Neural Information Processing Systems

Gradient-variation online learning aims to achieve regret guarantees that scale with variations in the gradients of online functions, which is crucial for attaining fast convergence in games and robustness in stochastic optimization, hence receiving increased attention. Existing results often require the smoothness condition by imposing a fixed bound on gradient Lipschitzness, which may be unrealistic in practice. Recent efforts in neural network optimization suggest a generalized smoothness condition, allowing smoothness to correlate with gradient norms. In this paper, we systematically study gradient-variation online learning under generalized smoothness. We extend the classic optimistic mirror descent algorithm to derive gradient-variation regret by analyzing stability over the optimization trajectory and exploiting smoothness locally. Then, we explore universal online learning, designing a single algorithm with the optimal gradient-variation regrets for convex and strongly convex functions simultaneously, without requiring prior knowledge of curvature. This algorithm adopts a two-layer structure with a meta-algorithm running over a group of base-learners. To ensure favorable guarantees, we design a new Lipschitz-adaptive meta-algorithm, capable of handling potentially unbounded gradients while ensuring a second-order bound to effectively ensemble the base-learners. Finally, we provide applications to fast-rate convergence in games and stochastic extended adversarial optimization.
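For concreteness, here is a minimal sketch of the optimistic (mirror) descent idea the abstract builds on, in its Euclidean form with the previous gradient used as the optimistic hint, which is the classical route to gradient-variation regret. The function name `optimistic_ogd`, the fixed step size `eta`, and the quadratic demo losses are illustrative assumptions and do not reflect the paper's generalized-smoothness step-size schedule or its two-layer universal algorithm.

```python
import numpy as np

def optimistic_ogd(grad_oracle, dim, T, eta=0.1):
    """Optimistic online gradient descent with the previous gradient as the hint.

    grad_oracle: callable (t, x) -> gradient of the t-th loss at x.
    Returns the list of played points x_1, ..., x_T.
    """
    y = np.zeros(dim)          # secondary iterate maintained by mirror descent
    hint = np.zeros(dim)       # optimistic guess m_t; here, the last observed gradient
    played = []
    for t in range(T):
        x = y - eta * hint     # optimistic step using the hint m_t
        played.append(x)
        g = grad_oracle(t, x)  # observe the gradient of the t-th loss at x
        y = y - eta * g        # standard descent step with the true gradient
        hint = g               # next round's hint is the gradient just observed
    return played

# Hypothetical demo: drifting quadratic losses f_t(x) = 0.5 * ||x - c_t||^2,
# whose gradient variation is small because consecutive minimizers move slowly.
if __name__ == "__main__":
    T = 100
    centers = [np.array([np.sin(0.05 * t), np.cos(0.05 * t)]) for t in range(T)]
    xs = optimistic_ogd(lambda t, x: x - centers[t], dim=2, T=T)
    print("final played point:", xs[-1])
```

When consecutive gradients are close, the hint nearly cancels the observed gradient, which is what lets the regret scale with the gradient variation rather than with the horizon.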